Overview
NVIDIA IGX™ is an industrial-grade, edge AI platform that combines enterprise-level hardware, software, and support. It’s purpose-built for industrial, robotics, and medical environments, delivering powerful AI compute, high-bandwidth sensor processing, enterprise security, and robot safety. The platform also comes with NVIDIA AI Enterprise and up to 10 years of support, so you can deliver AI safely and securely to support human-machine collaboration.
IGX is production-ready, combining industrial-grade hardware with enterprise software and support.
The NVIDIA IGX Thor Platform unlocks real-time sensor processing and AI reasoning for physical AI agents with industrial-grade hardware, enterprise software, and functional safety. It’s powered by the NVIDIA Blackwell Architecture iGPU and an optional dGPU, delivering up to 5581 FP4 TFLOPS of AI compute to effortlessly run multiple generative AI models at the edge. Compared to NVIDIA IGX Orin™, it provides up to 8x higher AI compute on iGPU, 2.5x higher AI compute on dGPU, and 2x better connectivity.
NVIDIA IGX Thor unifies high‑performance AI compute with functional safety for robots operating in dynamic environments. It supports both “inside‑out” safety using onboard sensors and “outside‑in” safety using infrastructure sensors. At its heart is a dedicated Functional Safety Island that provides an independent safety processor that isolates safety‑critical workloads — designed to meet ISO 26262 and IEC 61508 at ASIL D/SC3 and ASIL/SIL 2 levels.
NVIDIA AI Enterprise software powers edge AI on NVIDIA IGX Thor — ensuring mission-critical performance, security, and long-term enterprise support across the entire stack. It seamlessly runs generative AI — from VLA models like NVIDIA Isaac GR00T N to popular LLMs and VLMs such as NVIDIA Cosmos Reason — and brings cloud-native NVIDIA AI Enterprise and NVIDIA NIM™ to the edge. Applications include NVIDIA Isaac™ for robotics, Metropolis for visual AI, Holoscan for sensor processing, and NVIDIA Halos safety AI agents. This gives your organization a secure, stable, and high-performance experience for next-gen physical AI.
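NIM microservices expose an OpenAI-compatible HTTP API, so a model served locally on an IGX system can be queried with a plain JSON POST. A minimal sketch, assuming a NIM container is already serving a model on `localhost:8000`; the base URL and model name below are illustrative assumptions, not part of the IGX specification:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /v1/chat/completions request for a local NIM endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: query a hypothetical locally served model (endpoint and model name are assumptions).
req = build_chat_request(
    "http://localhost:8000",
    "meta/llama-3.1-8b-instruct",
    "Summarize the anomalies detected in the camera feed.",
)
# urllib.request.urlopen(req) would send it; omitted here because it needs a live NIM server.
```

Because the API follows the OpenAI schema, the same request shape works whether the model runs at the edge on IGX or in the cloud; only the base URL changes.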
Purchase your NVIDIA IGX Thor and IGX Orin Developer Kits from one of our trusted distributors. If the options below don’t meet your needs, please contact us for additional information.
The NVIDIA IGX Thor™ Developer Kit accelerates industrial and medical edge innovation with real-time sensor processing and AI reasoning.
| | IGX Thor Developer Kit | IGX Thor Developer Kit Mini |
|---|---|---|
| dGPU | NVIDIA RTX PRO™ 6000 Blackwell Max-Q Workstation Edition | – |
| AI Performance | Up to 5581 TFLOPS (FP4, sparse) | Up to 2070 TFLOPS (FP4, sparse) |
| iGPU | 2560-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores<br>Multi-Instance GPU (MIG) with 10 TPCs | 2560-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores<br>Multi-Instance GPU (MIG) with 10 TPCs |
| iGPU Max Frequency | 1.57 GHz | 1.57 GHz |
| dGPU Cores | 24,064-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores | – |
| CPU | 14-core Arm® Neoverse®-V3AE 64-bit CPU<br>64 KB I-Cache, 64 KB D-Cache<br>1 MB L2 Cache per core<br>16 MB shared system L3 Cache | 14-core Arm® Neoverse®-V3AE 64-bit CPU<br>64 KB I-Cache, 64 KB D-Cache<br>1 MB L2 Cache per core<br>16 MB shared system L3 Cache |
| CPU Max Frequency | 2.6 GHz | 2.6 GHz |
| Vision Accelerator | 1x PVA v3 | 1x PVA v3 |
| Memory | 128 GB 256-bit LPDDR5X (273 GB/s)<br>96 GB GDDR7 dGPU memory (1792 GB/s) | 128 GB 256-bit LPDDR5X (273 GB/s) |
| Storage | 1 TB M.2 NVMe (PCIe Gen5 x2) | Primary: 1 TB M.2 Key M NVMe (PCIe Gen5 x4)<br>Secondary: M.2 Key M UFS 3.1 |
| Video Encode | 2x NVEncode (iGPU)<br>4x NVEncode (dGPU) | 2x NVEncode |
| Video Decode | 2x NVDecode (iGPU)<br>4x NVDecode (dGPU) | 2x NVDecode |
| PCIe* | 2x Gen5 PCIe (x8, x16) | M.2 Key M slot with x4 PCIe Gen5 (populated with 1 TB NVMe)<br>M.2 Key E slot with x1 PCIe Gen5 (populated with Wi-Fi 6E + Bluetooth module) |
| USB* | USB 3.2 Gen2 Type-A stacked connectors<br>USB 3.2 Gen2 Type-C connector | 2x USB Type-A 3.2 Gen2<br>2x USB Type-C 3.1<br>1x USB Type-C (debug only) |
| Networking* | 2x RJ45 (1 GbE each)<br>2x QSFP112 (200 GbE each)<br>Supports ConnectX-7 | 1x 5 GbE RJ45 connector<br>1x QSFP28 (4x 25 GbE)<br>Wi-Fi 6E (populated on M.2 Key E slot with x1 PCIe Gen5) |
| ConnectX Support | Yes | No |
| Display | 1x VESA DisplayPort 1.4a | 1x VESA DisplayPort 1.4a |
| Other I/O | 1x Line-Out and 1x MIC (audio) | 2x FSI CAN headers<br>2x 6-pin automation headers<br>2x 5-pin headers<br>JTAG connector<br>1x 4-pin fan connector (12 V, PWM, and tach)<br>2-pin RTC backup battery connector<br>Micro-Fit power jack<br>Power, Force Recovery, and Reset buttons |
| BMC Support | Yes | No |
| Safety Support | Functional Safety Island on the SoC<br>Safety MCU on the carrier board | Functional Safety Island on the SoC<br>Safety MCU on the carrier board |
| Power | 40 W–130 W TDP<br>Up to 300 W for dGPU | 40 W–130 W |
| Mechanical | 382.7 mm x 262.7 mm x 151.2 mm | 243.19 mm x 112.40 mm x 56.88 mm (height includes feet, carrier board, module, and thermal solution) |
NVIDIA IGX Thor™ production systems make it easy for customers to go to production with our OEM partners. All NVIDIA-Certified Systems ensure that enterprise customers get exceptional performance, reliability, and support backed by NVIDIA AI Enterprise software.
| | IGX T7000 | IGX T5000 | |
|---|---|---|---|
| dGPU | NVIDIA RTX PRO™ 6000 Blackwell Max-Q Workstation Edition | NVIDIA RTX PRO™ 5000 Blackwell | – |
| AI Performance | Up to 5581 TFLOPS (FP4, sparse) | Up to 4293 TFLOPS (FP4, sparse) | Up to 2070 TFLOPS (FP4, sparse) |
| iGPU | 2560-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores<br>Multi-Instance GPU (MIG) with 10 TPCs | 2560-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores<br>Multi-Instance GPU (MIG) with 10 TPCs | 2560-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores<br>Multi-Instance GPU (MIG) with 10 TPCs |
| iGPU Max Frequency | 1.57 GHz | 1.57 GHz | 1.57 GHz |
| dGPU Cores | 24,064-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores | 14,080-core NVIDIA Blackwell architecture GPU with fifth-gen Tensor Cores | – |
| CPU | 14-core Arm® Neoverse®-V3AE 64-bit CPU<br>64 KB I-Cache, 64 KB D-Cache<br>1 MB L2 Cache per core<br>16 MB shared system L3 Cache | 14-core Arm® Neoverse®-V3AE 64-bit CPU<br>64 KB I-Cache, 64 KB D-Cache<br>1 MB L2 Cache per core<br>16 MB shared system L3 Cache | 14-core Arm® Neoverse®-V3AE 64-bit CPU<br>64 KB I-Cache, 64 KB D-Cache<br>1 MB L2 Cache per core<br>16 MB shared system L3 Cache |
| CPU Max Frequency | 2.6 GHz | 2.6 GHz | 2.6 GHz |
| Vision Accelerator | 1x PVA v3 | 1x PVA v3 | 1x PVA v3 |
| Memory | 128 GB 256-bit LPDDR5X (273 GB/s)<br>96 GB GDDR7 dGPU memory (1792 GB/s) | 128 GB 256-bit LPDDR5X (273 GB/s)<br>48 GB GDDR7 dGPU memory (1344 GB/s) | 128 GB 256-bit LPDDR5X (273 GB/s) |
| Storage | 4x SATA connectors<br>M.2 Key M connector (PCIe Gen5 x2) | 4x SATA connectors<br>M.2 Key M connector (PCIe Gen5 x2) | Supports NVMe through PCIe<br>Supports SSD through USB 3.2 |
| Video Encode | 2x NVEncode (iGPU)<br>4x NVEncode (dGPU) | 2x NVEncode (iGPU)<br>3x NVEncode (dGPU) | 2x NVEncode |
| Video Decode | 2x NVDecode (iGPU)<br>4x NVDecode (dGPU) | 2x NVDecode (iGPU)<br>3x NVDecode (dGPU) | 2x NVDecode |
| PCIe* | 2x Gen5 PCIe (x8, x16) | 2x Gen5 PCIe (x8, x16) | Up to 8 lanes, Gen5<br>Root port only: C1 (x1) and C3 (x2)<br>Root port or endpoint: C2 (x1), C4 (x8), and C5 (x4) |
| USB* | 1x USB 3.2 Gen2 Type-C connector<br>4x USB 3.2 Gen2 Type-A connectors | 1x USB 3.2 Gen2 Type-C connector<br>4x USB 3.2 Gen2 Type-A connectors | xHCI host controller with integrated PHY<br>Up to 3x USB 3.2<br>Up to 4x USB 2.0 |
| Networking* | 2x RJ45 (1 GbE each)<br>2x QSFP112 (200 GbE each)<br>Supports ConnectX-7<br>M.2 Key E connector (Wi-Fi/BT and 5G)<br>M.2 Key B connector (optional cellular) | 2x RJ45 (1 GbE each)<br>2x QSFP112 (200 GbE each)<br>Supports ConnectX-7<br>M.2 Key E connector (Wi-Fi/BT and 5G)<br>M.2 Key B connector (optional cellular) | 4x up to 25 Gbps MGBE<br>Wi-Fi 6E (populated on M.2 Key E slot with x1 PCIe Gen5) |
| ConnectX Support | Yes | Yes | No |
| Display | 1x VESA DisplayPort 1.4a | 1x VESA DisplayPort 1.4a | 1x shared HDMI 2.1<br>1x VESA DisplayPort 1.4a |
| Other I/O | 1x FSI CAN<br>2x USB headers for optional connections via cable<br>4x COM ports<br>1x LPT<br>1x TPM<br>2x CAN (1x T5000 CAN, 1x sMCU CAN)<br>1x GPIO<br>5x fans (2x T5000, 2x System, 1x CX-7) | 1x FSI CAN<br>2x USB headers for optional connections via cable<br>4x COM ports<br>1x LPT<br>1x TPM<br>2x CAN (1x T5000 CAN, 1x sMCU CAN)<br>1x GPIO<br>5x fans (2x T5000, 2x System, 1x CX-7) | 4x UART<br>4x CAN<br>3x SPI<br>13x I2C<br>6x PWM outputs |
| BMC Support | Yes | Yes | |
| Power | 40 W–130 W TDP<br>Up to 300 W for dGPU | 40 W–130 W TDP<br>Up to 300 W for dGPU | 40 W–130 W |
| Mechanical | 243.84 mm x 198.98 mm x 31.05 mm | 243.84 mm x 198.98 mm x 31.05 mm | 100 mm x 87 mm x 15.29 mm<br>699-pin B2B connector<br>Integrated thermal transfer plate (TTP) with heatpipe |
Accelerate edge AI application development with the NVIDIA IGX Orin Developer Kit, complete with chassis and power supply, for high-performance, industrial-grade edge AI.
| | IGX Orin Developer Kit |
|---|---|
| AI Performance | Up to 1705 TOPS (sparse) with optional RTX 6000 Ada Generation GPU |
| SOM (System on Module) | GPU: 2,048-core NVIDIA Ampere architecture with 64 Tensor Cores<br>CPU: 12-core Arm® Cortex®-A78AE v8.2 |
| NVIDIA ConnectX-7 | 2x 100 GbE<br>32-lane Gen5 PCIe switch (x8 upstream; x16 downstream, x8 downstream) |
| Safety MCU (sMCU) | Infineon Aurix TC397 |
| NVIDIA BMC (Baseboard Management Controller) Module | Aspeed AST2600<br>Microchip ERoT |
| GPU Max Frequency | 1.185 GHz |
| CPU Max Frequency | 1.971 GHz |
| DL Accelerator | 2x NVDLA 2.0 Engines |
| DLA Max Frequency | 1.4 GHz |
| Vision Accelerator | 1x PVA v2 |
| Memory | 64 GB 256-bit LPDDR5<br>204.8 GB/s |
| Storage | 500 GB NVMe (PCIe Gen4)<br>4x SATA 6 Gb/s expansion ports |
| Optional Discrete GPU Card | NVIDIA RTX A6000<br>NVIDIA RTX 6000 Ada Generation |
| Wireless | 802.11a/b/g/n/ac<br>Bluetooth 5.0 |
| Video Encode | 2x 4K60 (H.265)<br>4x 4K30 (H.265)<br>8x 1080p60 (H.265)<br>16x 1080p30 (H.265) |
| Video Decode | 1x 8K30 (H.265)<br>3x 4K60 (H.265)<br>7x 4K30 (H.265)<br>11x 1080p60 (H.265)<br>22x 1080p30 (H.265) |
| HDMI-IN | HDMI 2.0b input (up to 4Kp60) |
| PCIe | 2x PCIe Gen5 expansion slots (from ConnectX-7 PCIe switch)<br>x8 lanes within x16 physical connector<br>x16 lanes within x16 physical connector |
| USB | 1x USB 3.2 Gen2 Type-C connector<br>4x USB 3.2 Gen2 Type-A connectors |
| Ethernet | 2x RJ45 (up to 1 GbE)<br>2x QSFP28 ports (up to 100 GbE per port) |
| Display | 1x DisplayPort 1.4a output |
| Audio | 2x 3.5 mm aux jacks (MIC, line-out) |
| Power | 100–240 V AC<br>100 V AC: 8.5 A<br>240 V AC: 3.5 A |
| Mechanical | 262.7 mm x 382 mm x 151.2 mm<br>6,700 g |
Customers can choose a full board kit or a system-on-module for production-ready NVIDIA-Certified Systems, which ensure that enterprise customers get performance, reliability, and support backed by NVIDIA AI Enterprise software.
| | IGX Orin 700 |
|---|---|
| Performance | Up to 1705 TOPS |
| Specs | iGPU: 2048-core NVIDIA Ampere architecture with 64 Tensor Cores<br>CPU: 12-core Arm Cortex-A78AE<br>Memory: 64 GB LPDDR5 with ECC<br>Storage: 64 GB eMMC |
| Power | Up to 125 W (without dGPU)<br>400 W (with dGPU) |
| I/O Throughput | 2x 100 Gb/s |
| Optional dGPU | Yes |
| Integrated Dual 100 Gb/s ConnectX-7 | Yes |
| BMC | Yes |
| Functional Safety | Yes |
| Carrier Board Customization | No |
| Product Lifecycle and Enterprise SW Support | 10 years (until 2033) |
| Value Prop | Same NVIDIA certification process to ensure enterprise-level software support, AI safety, and functional safety for industries |
Get started with more resources, including documentation and software downloads.